
    Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions

    Following the success of the first edition of Robo-Identity, the second edition will provide an opportunity to expand the discussion about artificial identity. This year, we focus on emotions that are expressed through speech and voice. Synthetic robot voices can resemble, and are becoming indistinguishable from, expressive human voices. This is both an opportunity and a constraint: emotional speech can (falsely) convey a human-like identity that misleads people, raising ethical issues. How should we envision an agent's artificial identity? In what ways should robots maintain a machine-like stance, e.g., through robotic speech, and should increasingly human-like emotional expressions be seen as design opportunities? These are not mutually exclusive concerns. As this discussion needs to be conducted in a multidisciplinary manner, we welcome perspectives on challenges and opportunities from a variety of fields. For this year's edition, the special theme will be "speech, emotion and artificial identity".

    ASVspoof 2019: a large-scale public database of synthesized, converted and replayed speech

    Automatic speaker verification (ASV) is one of the most natural and convenient means of biometric person recognition. Unfortunately, just like all other biometric systems, ASV is vulnerable to spoofing, also referred to as “presentation attacks.” These vulnerabilities are generally unacceptable and call for spoofing countermeasures or “presentation attack detection” systems. In addition to impersonation, ASV systems are vulnerable to replay, speech synthesis, and voice conversion attacks. The ASVspoof challenge initiative was created to foster research on anti-spoofing and to provide common platforms for the assessment and comparison of spoofing countermeasures. The first edition, ASVspoof 2015, focused on countermeasures for the detection of text-to-speech synthesis (TTS) and voice conversion (VC) attacks. The second edition, ASVspoof 2017, focused instead on replay spoofing attacks and countermeasures. The ASVspoof 2019 edition is the first to consider all three spoofing attack types within a single challenge. While they originate from the same source database and the same underlying protocol, they are explored in two specific use case scenarios. Spoofing attacks within a logical access (LA) scenario are generated with the latest speech synthesis and voice conversion technologies, including state-of-the-art neural acoustic and waveform model techniques. Replay spoofing attacks within a physical access (PA) scenario are generated through carefully controlled simulations that support much more revealing analysis than was previously possible. Also new to the 2019 edition is the use of the tandem detection cost function (t-DCF) metric, which reflects the impact of spoofing and countermeasures on the reliability of a fixed ASV system. This paper describes the database design, protocol, spoofing attack implementations, and baseline ASV and countermeasure results. It also describes a human assessment of spoofed data in the logical access scenario. It was demonstrated that the spoofing data in the ASVspoof 2019 database have varying degrees of perceived quality and similarity to the target speakers, including spoofed data that cannot be differentiated from bona fide utterances even by human subjects. It is expected that the ASVspoof 2019 database, with its varied coverage of different types of spoofing data, could further foster research on anti-spoofing.
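    The tandem detection cost function combines countermeasure (CM) miss and false-alarm rates with the error rates of a fixed ASV system into a single cost. The Python sketch below is a rough illustration of that idea, following the form of the original (v1) t-DCF; the score distributions, threshold, priors, costs, and ASV error rates are illustrative placeholders rather than the official ASVspoof 2019 evaluation settings.

    # Minimal sketch of a tandem detection cost function (t-DCF) style metric:
    #   t-DCF(s) = C1 * Pmiss_cm(s) + C2 * Pfa_cm(s),
    # where C1 and C2 fold in priors, costs, and the error rates of a fixed ASV
    # system. All numbers below are placeholders, not the official ASVspoof 2019
    # cost model.
    import numpy as np

    def cm_error_rates(bona_scores, spoof_scores, threshold):
        """Miss and false-alarm rates of a countermeasure (CM) at one threshold.
        Higher scores are assumed to mean 'more likely bona fide'."""
        p_miss = np.mean(bona_scores < threshold)   # bona fide wrongly rejected
        p_fa = np.mean(spoof_scores >= threshold)   # spoof wrongly accepted
        return p_miss, p_fa

    def normalized_tdcf(p_miss_cm, p_fa_cm,
                        p_miss_asv, p_fa_asv, p_miss_spoof_asv,
                        pi_tar=0.9405, pi_non=0.0095, pi_spoof=0.05,
                        c_miss_asv=1.0, c_fa_asv=10.0,
                        c_miss_cm=1.0, c_fa_cm=10.0):
        """t-DCF at one CM operating point, normalized by the cost of the better
        of the two trivial countermeasures (accept-all / reject-all)."""
        c1 = pi_tar * (c_miss_cm - c_miss_asv * p_miss_asv) \
             - pi_non * c_fa_asv * p_fa_asv
        c2 = c_fa_cm * pi_spoof * (1.0 - p_miss_spoof_asv)
        return (c1 * p_miss_cm + c2 * p_fa_cm) / min(c1, c2)

    # Toy usage with synthetic CM scores and assumed ASV error rates.
    rng = np.random.default_rng(0)
    bona = rng.normal(2.0, 1.0, 1000)    # CM scores for bona fide trials
    spoof = rng.normal(-2.0, 1.0, 1000)  # CM scores for spoofed trials
    p_miss_cm, p_fa_cm = cm_error_rates(bona, spoof, threshold=0.0)
    print(normalized_tdcf(p_miss_cm, p_fa_cm,
                          p_miss_asv=0.05, p_fa_asv=0.05, p_miss_spoof_asv=0.5))

    In practice the metric is typically reported as its minimum over all CM decision thresholds, with the ASV error rates taken from a fixed baseline verification system.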